adaptive intervention


Statistical Reinforcement Learning in the Real World: A Survey of Challenges and Future Directions

Gazi, Asim H., Guo, Yongyi, Gao, Daiqi, Xu, Ziping, Zhang, Kelly W., Murphy, Susan A.

arXiv.org Machine Learning

Reinforcement learning (RL) has achieved remarkable success in real-world decision-making across diverse domains, including gaming, robotics, online advertising, public health, and natural language processing. Despite these advances, a substantial gap remains between RL research and its deployment in many practical settings. Two recurring challenges often underlie this gap. First, many settings offer limited opportunity for the agent to interact extensively with the target environment due to practical constraints. Second, target environments often undergo substantial changes, requiring redesign and redeployment of RL systems (e.g., advancements in science and technology that change the landscape of healthcare delivery). Addressing these challenges and bridging the gap between basic research and application requires theory and methodology that directly inform the design, implementation, and continual improvement of RL systems in real-world settings. In this paper, we frame the application of RL in practice as a three-component process: (i) online learning and optimization during deployment, (ii) post- or between-deployment offline analyses, and (iii) repeated cycles of deployment and redeployment to continually improve the RL system. We provide a narrative review of recent advances in statistical RL that address these components, including methods for maximizing data utility for between-deployment inference, enhancing sample efficiency for online learning within deployment, and designing sequences of deployments for continual improvement. We also outline future research directions in statistical RL that are use-inspired -- aiming for impactful application of RL in practice.


Enhancing Adaptive Behavioral Interventions with LLM Inference from Participant-Described States

Karine, Karine, Marlin, Benjamin M.

arXiv.org Artificial Intelligence

The use of reinforcement learning (RL) methods to support health behavior change via personalized and just-in-time adaptive interventions is of significant interest to health and behavioral science researchers focused on problems such as smoking cessation support and physical activity promotion. However, RL methods are often applied to these domains using a small collection of context variables to mitigate the significant data scarcity issues that arise from practical limitations on the design of adaptive intervention trials. In this paper, we explore an approach to significantly expanding the state space of an adaptive intervention without impacting data efficiency. The proposed approach enables intervention participants to provide natural language descriptions of aspects of their current state. It then leverages inference with pre-trained large language models (LLMs) to better align the policy of a base RL method with these state descriptions. To evaluate our method, we develop a novel physical activity intervention simulation environment that generates text-based state descriptions conditioned on latent state variables using an auxiliary LLM. We show that this approach has the potential to significantly improve the performance of online policy learning methods.
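A rough sense of the idea can be given in code. The sketch below is an editorial illustration, not the authors' implementation: it assumes a hypothetical helper, infer_state_features, standing in for a prompted LLM call that scores a few invented latent aspects (stress, fatigue, motivation) from the participant's free-text description and appends them to the base RL state. State augmentation is only one simple way text could enter the pipeline; the paper instead uses LLM inference to align the base policy with the descriptions.

```python
# Editorial sketch (not the authors' implementation): fold a participant's
# free-text description into the RL state via LLM-inferred features.
# `infer_state_features` and the feature names are illustrative assumptions.

import numpy as np

FEATURES = ["stress", "fatigue", "motivation"]  # hypothetical latent aspects

def infer_state_features(description: str) -> np.ndarray:
    """Placeholder for an LLM call that scores each aspect in [0, 1].

    In practice this would prompt a pre-trained LLM with the participant's
    text and parse a structured response; a keyword heuristic is used here
    purely so the sketch runs end to end.
    """
    text = description.lower()
    keywords = {"stress": ["stressed", "anxious"],
                "fatigue": ["tired", "exhausted"],
                "motivation": ["motivated", "energized"]}
    return np.array([float(any(k in text for k in keywords[f])) for f in FEATURES])

def augmented_state(base_state: np.ndarray, description: str) -> np.ndarray:
    """Concatenate the usual context variables with the LLM-inferred features."""
    return np.concatenate([base_state, infer_state_features(description)])

base = np.array([0.4, 1.0])  # e.g. scaled recent step count, weekend flag
print(augmented_state(base, "I feel tired and a bit stressed after work"))
```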


Q-Learning with Clustered-SMART (cSMART) Data: Examining Moderators in the Construction of Clustered Adaptive Interventions

Song, Yao, Speth, Kelly, Kilbourne, Amy, Quanbeck, Andrew, Almirall, Daniel, Wang, Lu

arXiv.org Machine Learning

A clustered adaptive intervention (cAI) is a pre-specified sequence of decision rules that guides practitioners on how best, and based on which measures, to tailor the cluster-level intervention to improve outcomes at the level of individuals within the clusters. A clustered sequential multiple assignment randomized trial (cSMART) is a type of trial used to inform the empirical development of a cAI. The most common type of secondary aim in a cSMART focuses on assessing causal effect moderation by candidate tailoring variables. We introduce a clustered Q-learning framework with the M-out-of-N Cluster Bootstrap, using data from a cSMART, to evaluate whether a set of candidate tailoring variables may be useful in defining an optimal cAI. This approach constructs confidence intervals (CIs) with near-nominal coverage for the parameters indexing the causal effect moderation function. Specifically, it allows reliable inferences concerning the utility of candidate tailoring variables in constructing a cAI that maximizes a mean end-of-study outcome, even when "non-regularity", a well-known challenge in this setting, is present. Simulations demonstrate the numerical performance of the proposed method across varying non-regularity conditions and investigate the impact of varying the number of clusters and the intra-cluster correlation coefficient on CI coverage. The methods are applied to the ADEPT dataset to inform the construction of a clinic-level cAI for improving evidence-based practice in treating mood disorders.
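To make the cluster bootstrap concrete, here is a minimal sketch of how such an interval could be formed for a single moderation coefficient in a one-stage linear Q-function. It is an editorial illustration only: the data layout, the choice m = ceil(n^0.8), and the synthetic data generator are assumptions, and the paper's actual estimator addresses the full multi-stage Q-learning setting.

```python
# Illustrative m-out-of-n CLUSTER bootstrap percentile interval for a
# moderation coefficient in a one-stage linear Q-function (not the paper's
# estimator). Data layout and m = ceil(n**0.8) are assumptions.

import numpy as np

rng = np.random.default_rng(0)

def fit_moderation_coef(X, A, S, Y):
    """OLS fit of Y ~ 1 + X + A + A*S; return the A*S (moderation) coefficient."""
    design = np.column_stack([np.ones_like(Y), X, A, A * S])
    beta, *_ = np.linalg.lstsq(design, Y, rcond=None)
    return beta[-1]

def m_out_of_n_cluster_bootstrap(data_by_cluster, n_boot=1000, alpha=0.05):
    n = len(data_by_cluster)
    m = int(np.ceil(n ** 0.8))                      # resample fewer clusters than observed
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=m)            # draw m whole clusters with replacement
        X, A, S, Y = (np.concatenate([data_by_cluster[i][k] for i in idx])
                      for k in ("X", "A", "S", "Y"))
        estimates.append(fit_moderation_coef(X, A, S, Y))
    lo, hi = np.quantile(estimates, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Tiny synthetic example: 20 clusters of 15 individuals each.
clusters = []
for _ in range(20):
    X = rng.normal(size=15); S = rng.normal(size=15)
    A = rng.choice([-1, 1], size=15)
    Y = 0.5 * X + A * (0.2 + 0.3 * S) + rng.normal(size=15)
    clusters.append({"X": X, "A": A, "S": S, "Y": Y})
print(m_out_of_n_cluster_bootstrap(clusters, n_boot=500))
```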


Combining LLM decision and RL action selection to improve RL policy for adaptive interventions

Karine, Karine, Marlin, Benjamin M.

arXiv.org Artificial Intelligence

Reinforcement learning (RL) is increasingly being used in the healthcare domain, particularly for the development of personalized adaptive health interventions. Inspired by the success of Large Language Models (LLMs), we are interested in using LLMs to update the RL policy in real time, with the goal of accelerating personalization. We use text-based user preferences to influence action selection on the fly, so that a user's preference is incorporated immediately. We use "user preference" broadly to refer to a user's personal preference, constraint, health status, or a statement expressing a like or dislike. Our novel approach is a hybrid method that combines the LLM response and the RL action selection to improve the RL policy. Given an LLM prompt that incorporates the user preference, the LLM acts as a filter in the typical RL action selection. We investigate different prompting strategies and action selection strategies. To evaluate our approach, we implement a simulation environment that generates the text-based user preferences and models the constraints that impact behavioral dynamics. We show that our approach takes the text-based user preferences into account while improving the RL policy, thereby improving personalization in adaptive interventions.
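The filtering idea can be sketched in a few lines. This is an assumption-laden illustration rather than the authors' code: llm_allowed_actions is a stub standing in for a prompted LLM call that maps the stated preference to a permitted subset of actions, and the base RL policy (epsilon-greedy here) then selects only among that subset.

```python
# Illustrative hybrid selection: an LLM-style filter restricts the action set
# according to the user's stated preference, then the base RL policy chooses.
# `llm_allowed_actions`, the action names, and epsilon-greedy are assumptions.

import numpy as np

rng = np.random.default_rng(0)
ACTIONS = ["no_message", "walking_suggestion", "gym_suggestion"]

def llm_allowed_actions(preference: str) -> list[int]:
    """Placeholder for prompting an LLM with the preference and action list."""
    if "no gym" in preference.lower():
        return [i for i, a in enumerate(ACTIONS) if a != "gym_suggestion"]
    return list(range(len(ACTIONS)))

def hybrid_select(q_values: np.ndarray, preference: str, eps: float = 0.1) -> int:
    """Epsilon-greedy over only the LLM-permitted actions."""
    allowed = llm_allowed_actions(preference)
    if rng.random() < eps:
        return int(rng.choice(allowed))
    return max(allowed, key=lambda a: q_values[a])

q = np.array([0.1, 0.4, 0.9])
print(ACTIONS[hybrid_select(q, "Please, no gym suggestions this week")])
```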


The Digital Transformation in Health: How AI Can Improve the Performance of Health Systems

Periáñez, África, del Río, Ana Fernández, Nazarov, Ivan, Jané, Enric, Hassan, Moiz, Rastogi, Aditya, Tang, Dexian

arXiv.org Artificial Intelligence

Mobile health has the potential to revolutionize health care delivery and patient engagement. In this work, we discuss how integrating Artificial Intelligence into digital health applications (focused on supply chain, patient management, and capacity building, among other use cases) can improve health system and public health performance. We present an Artificial Intelligence and Reinforcement Learning platform that allows the delivery of adaptive interventions whose impact can be optimized through experimentation and real-time monitoring. The system can integrate multiple data sources and digital health applications. The flexibility of this platform to connect to various mobile health applications and digital devices, and to send personalized recommendations based on past data and predictions, can significantly improve the impact of digital tools on health system outcomes. We specifically discuss the potential of this approach in resource-poor settings, where its impact on health outcomes could be more decisive. The framework is, however, similarly applicable to improving efficiency in health systems where scarcity is not an issue.


StepCountJITAI: simulation environment for RL with application to physical activity adaptive intervention

Karine, Karine, Marlin, Benjamin M.

arXiv.org Artificial Intelligence

The use of reinforcement learning (RL) to learn policies for just-in-time adaptive interventions (JITAIs) is of significant interest in many behavioral intervention domains including improving levels of physical activity. In a messaging-based physical activity JITAI, a mobile health app is typically used to send messages to a participant to encourage engagement in physical activity. In this setting, RL methods can be used to learn what intervention options to provide to a participant in different contexts. However, deploying RL methods in real physical activity adaptive interventions comes with challenges: the cost and time constraints of real intervention studies result in limited data to learn adaptive intervention policies. Further, commonly used RL simulation environments have dynamics that are of limited relevance to physical activity adaptive interventions and thus shed little light on what RL methods may be optimal for this challenging application domain. In this paper, we introduce StepCountJITAI, an RL environment designed to foster research on RL methods that address the significant challenges of policy learning for adaptive behavioral interventions.
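The flavor of such an environment can be conveyed with a toy interface. The class below is not StepCountJITAI itself; the transition rules (habituation that dampens message effects, a disengagement risk from over-messaging) are invented for illustration, but the reset/step interface mirrors what an RL method would interact with.

```python
# Toy sketch of a messaging-based physical activity environment (the actual
# StepCountJITAI dynamics differ; these transition rules are invented).
# Actions: 0 = no message, 1 = send an activity message.

import numpy as np

class ToyStepCountEnv:
    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.habituation = 0.0      # grows with repeated messaging
        self.disengaged = False     # participant may stop responding
        return self._obs()

    def _obs(self):
        return np.array([self.habituation, float(self.disengaged)])

    def step(self, action):
        if action == 1 and not self.disengaged:
            boost = 1000 * (1 - self.habituation)           # message effect decays
            self.habituation = min(1.0, self.habituation + 0.1)
            if self.rng.random() < 0.05 * self.habituation:
                self.disengaged = True                      # over-messaging risk
        else:
            boost = 0
            self.habituation = max(0.0, self.habituation - 0.05)
        steps = self.rng.normal(4000, 500) + boost          # reward = daily step count
        return self._obs(), steps, False, {}

env = ToyStepCountEnv()
obs = env.reset()
obs, reward, done, info = env.step(1)
print(obs, round(reward))
```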


Adaptive User Journeys in Pharma E-Commerce with Reinforcement Learning: Insights from SwipeRx

del Río, Ana Fernández, Leong, Michael Brennan, Saraiva, Paulo, Nazarov, Ivan, Rastogi, Aditya, Hassan, Moiz, Tang, Dexian, Periáñez, África

arXiv.org Artificial Intelligence

This paper introduces a reinforcement learning (RL) platform that enhances end-to-end user journeys in healthcare digital tools through personalization. We explore a case study with SwipeRx, the most popular all-in-one app for pharmacists in Southeast Asia, demonstrating how the platform can be used to personalize and adapt user experiences. Our RL framework is tested through a series of experiments with product recommendations tailored to each pharmacy based on real-time information on their purchasing history and in-app engagement, showing a significant increase in basket size. By integrating adaptive interventions into existing mobile health solutions and enriching user journeys, our platform offers a scalable solution to improve pharmaceutical supply chain management, health worker capacity building, and clinical decision-making and patient care, ultimately contributing to better healthcare outcomes.


Adaptive Behavioral AI: Reinforcement Learning to Enhance Pharmacy Services

del Río, Ana Fernández, Leong, Michael Brennan, Saraiva, Paulo, Nazarov, Ivan, Rastogi, Aditya, Hassan, Moiz, Tang, Dexian, Periáñez, África

arXiv.org Artificial Intelligence

Pharmacies are critical in healthcare systems, particularly in low- and middle-income countries. Providing pharmacists with the right behavioral interventions or nudges can enhance their skills, public health awareness, and pharmacy inventory management, ensuring access to essential medicines that ultimately benefit their patients. We introduce an operational reinforcement learning system to deliver personalized behavioral interventions through mobile health applications. We illustrate its potential by discussing a series of initial experiments run with SwipeRx, an all-in-one app for pharmacists, including B2B e-commerce, in Indonesia. The proposed method has broader applications, extending beyond pharmacy operations to optimizing healthcare delivery.


Adaptive Interventions with User-Defined Goals for Health Behavior Change

Mandyam, Aishwarya, Joerke, Matthew, Engelhardt, Barbara E., Brunskill, Emma

arXiv.org Artificial Intelligence

Physical inactivity remains a major public health concern and is associated with adverse health outcomes such as cardiovascular disease and type-2 diabetes. Mobile health applications present a promising avenue for low-cost, scalable physical activity promotion, yet often suffer from small effect sizes and low adherence rates, particularly in comparison to human coaching. Goal-setting is a critical component of health coaching that has been underutilized in adaptive algorithms for mobile health interventions. This paper introduces a modification to the Thompson sampling algorithm that emphasizes individualized goal-setting by optimizing personalized reward functions. As a step towards supporting goal-setting, this paper offers a balanced approach that can leverage shared structure while optimizing individual preferences and goals. We prove that our modification incurs only a constant penalty on the cumulative regret while preserving the sample complexity benefits of data sharing. In a physical activity simulator, we demonstrate that our algorithm achieves substantial improvements in cumulative regret compared to baselines that do not share data or do not optimize for individualized rewards.
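A loose sketch of the ingredients, not the paper's algorithm or its regret analysis: a shared Bayesian linear model is updated with everyone's data (the shared structure), while each participant's action is scored with a personalized reward reflecting their own goal, illustrated here by an assumed goal weight and per-action burden term.

```python
# Illustrative Thompson sampling with a SHARED posterior over outcome models
# and PERSONALIZED rewards (goal weight and burden terms are assumptions;
# this is not the paper's algorithm).

import numpy as np

rng = np.random.default_rng(0)
d = 3                                                # context-action feature dimension

# Shared Bayesian linear model for a scalar outcome (e.g. step count).
B = np.eye(d)                                        # posterior precision
f = np.zeros(d)                                      # precision-weighted mean

def posterior_sample():
    cov = np.linalg.inv(B)
    return rng.multivariate_normal(cov @ f, cov)

def select_action(contexts, goal_weight, burden):
    """contexts: one feature vector per action; the personalized reward trades
    off the predicted outcome against a per-action burden the user dislikes."""
    theta = posterior_sample()
    personalized = [goal_weight * (x @ theta) - burden[i]
                    for i, x in enumerate(contexts)]
    return int(np.argmax(personalized))

def update(x, y):
    """Standard Bayesian linear-regression update shared across all users."""
    global B, f
    B = B + np.outer(x, x)
    f = f + y * x

contexts = [rng.normal(size=d) for _ in range(2)]    # two candidate actions
a = select_action(contexts, goal_weight=1.0, burden=[0.0, 0.3])
update(contexts[a], y=rng.normal())
print("chose action", a)
```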


Adaptive interventions for both accuracy and time in AI-assisted human decision making

Swaroop, Siddharth, Buçinca, Zana, Doshi-Velez, Finale

arXiv.org Artificial Intelligence

In settings where users are both time-pressured and need high accuracy, such as doctors working in emergency rooms, we want to provide AI assistance that both increases accuracy and reduces time. However, different types of AI assistance have different benefits: some reduce time taken while increasing overreliance on AI, while others do the opposite. We therefore want to adapt what AI assistance we show depending on various properties (of the question and of the user) in order to best trade off our two objectives. We introduce a study where users have to prescribe medicines to aliens, and use it to explore the potential for adapting AI assistance. We find evidence that it is beneficial to adapt the AI assistance depending on the question, leading to good tradeoffs between time taken and accuracy. Future work will consider machine learning algorithms (such as reinforcement learning) to automate this adaptation.